As a molecular imaging modality, photoacoustic imaging has attracted attention because it provides optical-contrast images of physiological information at a relatively deep imaging depth. However, its sensitivity is limited, even with exogenous contrast agents, by background photoacoustic signals generated from non-targeted absorbers such as blood and the boundaries between different biological tissues. In addition, clutter artifacts generated in both the in-plane and out-of-plane imaging regions further degrade sensitivity. We propose a method to eliminate these non-targeted photoacoustic signals. For this study, we used a dual-modal ultrasound-photoacoustic contrast agent capable of generating both backscattered ultrasound and photoacoustic signals in response to transmitted ultrasound and irradiated light, respectively. Ultrasound images of the contrast agents are used to construct a masking image that encodes the location of the target site and is applied to the photoacoustic image acquired after contrast-agent injection. In-vitro and in-vivo experiments demonstrated that the ultrasound-derived masking image completely removes non-targeted photoacoustic signals. The proposed method can thus enable clearer visualization of the target area in photoacoustic images.

This work presents a methodology for assessing cell concentration in the range 5 to 100 cells/μl, suitable for in-vivo analysis of serous body fluids. The methodology is based on quantitative analysis of ultrasound images obtained from cell suspensions, and takes into account applicability criteria such as short analysis times, moderate frequency, and absolute concentration estimation, all necessary to deal with the variability of tissues among different patients.
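As a rough illustration of the kind of quantitative image analysis this involves, the sketch below counts isolated echo spots above a noise floor in an envelope image and divides by the interrogated volume. The threshold, voxel volume, and spot-counting scheme are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def estimate_concentration(image, noise_threshold=0.2, voxel_volume_ul=1e-4):
    """Toy concentration estimate (scatterers/μl) from an envelope image.

    Counts connected echo spots above a noise threshold and divides by the
    interrogated volume. All parameter values here are hypothetical.
    """
    mask = image > noise_threshold          # pixels above the noise floor
    visited = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and not visited[i, j]:
                count += 1                  # new echo spot found
                stack = [(i, j)]            # flood-fill its connected pixels
                while stack:
                    r, c = stack.pop()
                    if 0 <= r < rows and 0 <= c < cols and mask[r, c] and not visited[r, c]:
                        visited[r, c] = True
                        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    # concentration = echoes counted / volume covered by the image
    return count / (image.size * voxel_volume_ul)
```

At low concentrations, echoes rarely overlap, so spot counting approximates the true scatterer count; overlapping echoes at higher concentrations are exactly the failure mode the paper's simulations analyse.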
Numerical simulations provided the framework to analyse the impact of echo overlapping and of the polydispersity of scatterer sizes on the cell concentration estimate. The range of cell concentrations that can be analysed, as a function of the transducer and the emitted waveform, is also discussed. Experiments evaluated the performance of the method using 7 μm and 12 μm polystyrene particles suspended in water at 5 to 100 particles/μl. A single scanning focused transducer with a central frequency of 20 MHz was used to obtain the ultrasound images. The proposed concentration estimator proved robust to different particle sizes and to variations in gain acquisition settings. The effect of tissue placed in the ultrasound path between the probe and the sample was also investigated using 3 mm-thick tissue mimics. Under these conditions, the algorithm remained robust for the concentration analysis of 12 μm particle suspensions, but significant deviations were obtained for the smallest particles.

Forensic odontology is an important branch of forensics dealing with human identification based on dental evidence. This paper proposes a novel method that uses deep convolutional neural networks to assist human identification by automatically and accurately matching 2-D panoramic dental X-ray images. Designed as a top-down architecture, the network incorporates an improved channel attention module and a learnable connected module to better extract features for matching. By integrating associated features among all channel maps, the channel attention module selectively emphasizes interdependent channel information, which contributes to more precise recognition. The learnable connected module not only connects different layers in a feed-forward fashion but also searches for the optimal connections of each connected layer, so that the connections among layers are learned automatically and adaptively.
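The channel attention idea described above can be sketched in the style of squeeze-and-excitation: pool each channel to a descriptor, pass it through a small bottleneck, and re-weight the channels with a learned gate. This is a minimal numpy illustration of the general mechanism, not the paper's improved module; the weight shapes and reduction ratio are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    feature_maps: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction r.
    Channels are re-weighted by a gate computed from their global statistics.
    """
    # squeeze: global average pool each channel map to one descriptor
    z = feature_maps.mean(axis=(1, 2))            # shape (C,)
    # excitation: bottleneck MLP (ReLU) followed by a sigmoid gate
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))     # shape (C,)
    # re-scale: emphasize or suppress each channel by its gate value
    return feature_maps * s[:, None, None]
```

Because the gate is computed from all channel descriptors jointly, each channel's weight depends on the others, which is the sense in which such a module "integrates associated features among channel maps".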
Extensive experiments demonstrate that our method achieves new state-of-the-art performance in human identification using dental images. Specifically, on a dataset of 1,168 dental panoramic images from 503 different subjects, the method reaches 87.21% rank-1 accuracy and 95.34% rank-5 accuracy. Code has been released on GitHub (https://github.com/cclaiyc/TIdentify).

Accurate camera localization is an essential part of tracking systems, but localization results are strongly affected by illumination. Including data collected under various lighting conditions can improve the robustness of a localization algorithm to lighting variation, but collecting such data is tedious and time consuming. With synthetic images, it is easy to accumulate a large variety of views under varying illumination and weather conditions. Despite continuously improving processing power and rendering algorithms, however, synthetic images do not perfectly match real images of the same scene; this real-to-synthetic gap also affects the accuracy of camera localization. To reduce the impact of this gap, we introduce the Real-to-Synthetic Feature Transform (REST), a fully connected neural network that converts real features into their synthetic counterparts. The converted features can then be matched against the accumulated database for robust camera localization. Our experimental results show that REST improves matching accuracy by approximately 28% compared to a naive method, supporting robust camera localization under varying illumination.

Chronic diseases evolve slowly throughout a patient's lifetime, creating heterogeneous progression patterns that make clinical outcomes remarkably varied across individual patients.
A tool capable of identifying temporal phenotypes based on patients' different progression patterns and clinical outcomes would allow clinicians to better forecast disease progression by recognizing a group of similar past patients, and to design treatment guidelines tailored to specific phenotypes. To build such a tool, we propose a deep learning approach, which we refer to as outcome-oriented deep temporal phenotyping (ODTP), that identifies temporal phenotypes of disease progression while accounting for what type of clinical outcomes will occur and when, based on the longitudinal observations. More specifically, we model clinical outcomes throughout a patient's longitudinal observations via time-to-event (TTE) processes whose conditional intensity functions are estimated as non-linear functions using a recurrent neural network.
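The core modelling idea, an RNN whose hidden state parameterises a non-negative conditional intensity, can be sketched as follows. The recurrence, dimensions, and softplus link are illustrative assumptions for a minimal TTE process, not ODTP's actual architecture.

```python
import numpy as np

def softplus(x):
    # smooth, strictly positive link: log(1 + exp(x))
    return np.log1p(np.exp(x))

def conditional_intensity(observations, Wx, Wh, v):
    """Sketch of an RNN-parameterised conditional intensity for a TTE process.

    A hidden state h summarises the longitudinal observations seen so far;
    the intensity lambda_t = softplus(v . h_t) is non-negative by
    construction. Weights Wx, Wh, v and their shapes are hypothetical.
    """
    h = np.zeros(Wh.shape[0])
    intensities = []
    for x in observations:             # one clinical visit at a time
        h = np.tanh(Wx @ x + Wh @ h)   # recurrent update of the patient state
        intensities.append(softplus(v @ h))
    return np.array(intensities)
```

Because the intensity at each step is a non-linear function of the full observation history through the hidden state, the model can express outcome risks that depend on how, not just where, a patient's trajectory has evolved.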